
Doll, William J.

Topic Weight Topic Terms
1.166 instrument measurement factor analysis measuring measures dimensions validity based instruments construct measure conceptualization sample reliability
0.615 satisfaction information systems study characteristics data results using user related field survey empirical quality hypotheses
0.425 computing end-user center support euc centers management provided users user services organizations end satisfaction applications
0.230 approach analysis application approaches new used paper methodology simulation traditional techniques systems process based using
0.229 form items item sensitive forms variety rates contexts fast coefficients meaning higher robust scores hardware
0.227 mis problems article systems management edp managers organizations data survey application examines need experiences
0.213 management practices technology information organizations organizational steering role fashion effective survey companies firms set planning
0.179 development systems methodology methodologies information framework approach approaches paper analysis use presented applied assumptions based
0.155 production manufacturing marketing information performance systems level impact plant model monitor does strategies 500 unit
0.153 attributes credibility wikis tools wiki potential consequences gis potentially expectancy shaping exploring related anonymous attribute
0.145 information strategy strategic technology management systems competitive executives role cio chief senior executive cios sis
0.145 validity reliability measure constructs construct study research measures used scale development nomological scales instrument measurement
0.139 user involvement development users satisfaction systems relationship specific results successful process attitude participative implementation effective
0.125 business large organizations using work changing rapidly make today's available designed need increasingly recent manage
0.122 organizations new information technology develop environment challenges core competencies management environmental technologies development emerging opportunities
0.120 market competition competitive network markets firms products competing competitor differentiation advantage competitors presence dominant structure
0.118 model research data results study using theoretical influence findings theory support implications test collected tested
0.115 users end use professionals user organizations applications needs packages findings perform specialists technical computing direct

Coauthors (number of co-authorships):
Torkzadeh, Gholamreza (3); Raghunathan, T. S. (2); Xia, Weidong (2); Ahmed, Mesbah U. (1); Deng, Xiaodong (1); Gupta, Yash P. (1); Lim, Jeen-Su (1); Vonderembse, Mark A. (1)
Keywords (number of articles):
User satisfaction (3); Confirmatory factor analysis (2); End-user computing (2); end-user computing satisfaction (2); Management (2); application selection criteria (1); credibility syndrome (1); computer integrated manufacturing (1); executive steering committee (1); factorial invariance (1); Higher-order factor models (1); IS success (1); IS research methodologies (1); instrument validation (1); MIS (1); MIS management (1); multiple criteria evaluation (1); Reliability (1); research methods (1); system planning objectives (1); systems development process (1); strategic planning (1); User information satisfaction (1); user attitudes (1); Validity (1)

Articles (6)

The Meaning and Measurement of User Satisfaction: A Multigroup Invariance Analysis of the End-User Computing Satisfaction Instrument. (Journal of Management Information Systems, 2004)
Abstract:
    Although user satisfaction is widely used by researchers and practitioners to evaluate information system success, important issues related to its meaning and measurement across population subgroups have not been adequately resolved. To be most useful in decision-making, instruments like end-user computing satisfaction (EUCS), which are designed to evaluate system success, should be robust. That is, they should enable comparisons by providing equivalent measurement across diverse samples that represent the variety of conditions or population subgroups present in organizations. Using a sample of 1,166 responses, the EUCS instrument is tested for measurement invariance across four dimensions--respondent positions, types of application, hardware platforms, and modes of development. While the results suggest that the meaning of user satisfaction is context sensitive and differs across population subgroups, the 12 measurement items are invariant across all four dimensions. The 12-item summed scale enables researchers or practitioners to compare EUCS scores across the instrument's originally intended universe of applicability.
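The kind of subgroup comparison the summed 12-item scale permits can be sketched as follows. This is a minimal illustration on simulated data with hypothetical subgroup names (not the study's 1,166 responses), using scipy's one-way ANOVA to compare mean summed EUCS scores across respondent positions:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical 12-item responses (5-point scale) for three respondent-position subgroups.
groups = {pos: rng.integers(1, 6, size=(50, 12))
          for pos in ("managerial", "professional", "clerical")}

# Summed 12-item EUCS score per respondent (possible range 12-60).
sums = {pos: items.sum(axis=1) for pos, items in groups.items()}

# One-way ANOVA: do mean EUCS scores differ across positions?
stat, p = f_oneway(*sums.values())
print(f"F = {stat:.2f}, p = {p:.3f}")
```

Such a mean comparison is only interpretable because the invariance results establish that the 12 items measure the construct equivalently across the subgroups being compared.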
A Confirmatory Factor Analysis of the User Information Satisfaction Instrument. (Information Systems Research, 1995)
Abstract:
    The structure and dimensionality of the user information satisfaction (UIS) construct are important theoretical issues that have received considerable attention. Building upon the work of Bailey and Pearson (1983), Ives et al. (1983) conduct an exploratory factor analysis and recommend a 13-item instrument (two indicators per item) for measuring user information satisfaction. Ives et al. also contend that UIS comprises three component measures (information product, EDP staff and services, and user knowledge or involvement). In a replication using exploratory techniques, Baroudi and Orlikowski (1988) confirm the three-factor structure and support the diagnostic utility of the three-factor model. Other researchers have suggested a need for caution in using the UIS instrument as a single measure of user satisfaction; they contend that the instrument's three components measure quite different dimensions whose antecedents and consequences should be studied separately. The acceptance of UIS as a standardized instrument requires confirmation that it explains and measures the user information satisfaction construct and its components. Based on a sample of 224 respondents, this research uses confirmatory factor analysis (LISREL) to test alternative models of underlying factor structure and to assess the reliability and validity of factors and items. The results support a revised UIS model with four first-order factors and one second-order (higher-order) factor. To cross-validate these results, the authors reexamine two additional data sets, including the original Baroudi and Orlikowski data, against the revised UIS model. The results show that the revised model provides a better model-data fit in all three data sets. Thus, the evidence supports the use of (1) the 13-item instrument as a measure of overall UIS, and (2) four component factors for explaining the UIS construct.
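The pattern a second-order factor model is meant to explain can be illustrated on simulated data (the loadings and sample below are hypothetical, not the study's LISREL estimates): when one higher-order factor drives four first-order subscales, the subscale scores are uniformly positively intercorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 224  # matches the study's sample size, but the data here are simulated

# One second-order factor driving four first-order subscale scores.
g = rng.normal(size=n)                       # higher-order UIS factor
loadings = np.array([0.8, 0.7, 0.75, 0.6])   # hypothetical second-order loadings
noise = rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)
subscales = g[:, None] * loadings + noise    # each subscale has unit variance

# A uniformly positive inter-subscale correlation matrix is the signature
# a higher-order factor model is proposed to account for.
r = np.corrcoef(subscales, rowvar=False)
print(np.round(r, 2))
```

Confirmatory factor analysis goes further than this sketch by testing whether one higher-order factor reproduces the observed correlations better than rival structures (e.g., a single first-order factor or uncorrelated factors).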
A Confirmatory Factor Analysis of the End-User Computing Satisfaction Instrument. (MIS Quarterly, 1994)
Abstract:
    This article presents a confirmatory factor analysis of the end-user computing satisfaction instrument. User satisfaction is often considered the most important measure of information systems success. This study uses LISREL VII to test hypotheses against sample data collected from 409 computer end users in 18 different organizations using 139 computer applications, including accounts payable, financial planning, inventory, and computer-aided design. The data were statistically analyzed under a variety of competing factor models.
The Measurement of End-User Computing Satisfaction. (MIS Quarterly, 1988)
Abstract:
    This article contrasts traditional and end-user computing environments and reports on the development of an instrument that merges ease-of-use and information-product items to measure the satisfaction of users who interact directly with the computer for a specific application. Using a survey of 618 end users, the researchers conducted a factor analysis and modified the instrument. The results suggest a 12-item instrument that measures five components of end-user satisfaction -- content, accuracy, format, ease of use, and timeliness. Evidence of the instrument's discriminant validity is presented, and reliability and validity are assessed by nature and type of application. Finally, standards for evaluating end-user applications are presented, and the instrument's usefulness for achieving more precision in research questions is explored.
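A standard first step in the kind of reliability assessment the abstract describes is Cronbach's alpha for the 12-item scale. Below is a minimal sketch on simulated responses (hypothetical data, not the study's 618 surveys):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
# Hypothetical 12-item responses: a common satisfaction factor plus item noise.
true_score = rng.normal(size=(300, 1))
items = true_score + rng.normal(scale=0.8, size=(300, 12))

alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")
```

Alpha captures only internal consistency; the validity evidence the abstract reports (discriminant validity, assessment by application type) requires separate analyses beyond this statistic.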
Forging a Partnership to Achieve Competitive Advantage: The CIM Challenge. (MIS Quarterly, 1987)
Abstract:
    Computer-integrated manufacturing (CIM) could well become the most important application of information technology for achieving competitive advantage. Recent advances in manufacturing and information technologies present promising new strategic alternatives for designing business systems. CIM enables firms to respond quickly to market changes, achieve economies of scope, and manage the complexity of a multi-product environment. Attaining the strategic benefits of CIM requires decision support to integrate marketing with design, design with manufacturing, and manufacturing with strategic positioning. This emerging technology requires a partnership of executives in engineering, manufacturing, marketing, and MIS who share a common vision of how CIM makes possible new approaches to designing business systems.
Diagnosing and Treating the Credibility Syndrome. (MIS Quarterly, 1983)
Abstract:
    The credibility syndrome is a special case of MIS development failure with a distinctive set of symptoms and causes. In credibility problems, the directors of data processing -- their management approaches and systems development philosophies -- play an important role.